
    Mode-Suppression: A Simple, Stable and Scalable Chunk-Sharing Algorithm for P2P Networks

    The ability of a P2P network to scale its throughput up in proportion to the arrival rate of peers has recently been shown to be crucially dependent on the chunk-sharing policy employed. Some policies can result in low frequencies of a particular chunk, known as the missing chunk syndrome, which can dramatically reduce throughput and lead to instability of the system. For instance, commonly used policies that nominally "boost" the sharing of infrequent chunks, such as the well-known rarest-first algorithm, have been shown to be unstable. Recent efforts have largely focused on the careful design of boosting policies to mitigate this issue. We take a complementary viewpoint, and instead consider a policy that simply prevents the sharing of the most frequent chunk(s). Following terminology from statistics, wherein the most frequent value in a data set is called the mode, we refer to this policy as mode-suppression. We also consider a more general version that suppresses the mode only if the mode frequency is larger than the lowest frequency by a fixed threshold. We prove the stability of mode-suppression using Lyapunov techniques, and use a Kingman bound argument to show that the total download time does not increase with the peer arrival rate. We then design versions of mode-suppression that sample a small number of peers at each time, and construct noisy mode estimates by aggregating these samples over time. We show numerically that these variants of mode-suppression yield near-optimal download times, and outperform all other recently proposed chunk-sharing algorithms.
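
    The suppression rule itself is simple enough to state in a few lines of code. The sketch below illustrates the threshold variant described above; it is not the authors' implementation, and the chunk-count bookkeeping and the function name sharable_chunks are assumptions made for the example.

        # Minimal sketch of threshold mode-suppression (hypothetical interfaces).
        # Given counts of how many peers currently hold each chunk, return the
        # chunk indices that peers are allowed to share in the current exchange.
        def sharable_chunks(chunk_counts, threshold=0):
            max_count = max(chunk_counts)
            min_count = min(chunk_counts)
            # Suppress the mode only when it leads the rarest chunk by more than
            # `threshold`; threshold=0 recovers plain mode-suppression.
            if max_count - min_count > threshold:
                return [i for i, c in enumerate(chunk_counts) if c < max_count]
            return list(range(len(chunk_counts)))

        # Example: chunk 2 is the mode and is withheld once its lead exceeds 3.
        print(sharable_chunks([5, 6, 12, 7], threshold=3))   # -> [0, 1, 3]
        print(sharable_chunks([5, 6, 8, 7], threshold=3))    # -> [0, 1, 2, 3]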

    A Systematic Approach to Incremental Redundancy over Erasure Channels

    As sensing and instrumentation play an increasingly important role in systems controlled over wired and wireless networks, the need to better understand delay-sensitive communication becomes a prime issue. Along these lines, this article studies the operation of data links that employ incremental redundancy as a practical means to protect information from the effects of unreliable channels. Specifically, this work extends a powerful methodology termed sequential differential optimization to choose near-optimal block sizes for hybrid ARQ over erasure channels. In doing so, an interesting connection between random coding and well-known constants in number theory is established. Furthermore, results show that the impact of the coding strategy adopted and the propensity of the channel to erase symbols naturally decouple when analyzing throughput. Overall, block size selection is motivated by normal approximations on the probability of decoding success at every stage of the incremental transmission process. This novel perspective, which rigorously bridges hybrid ARQ and coding, offers a pragmatic means to select code rates and blocklengths for incremental redundancy. Comment: 7 pages, 2 figures; a shorter version of this article will appear in the proceedings of ISIT 201
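
    To make the "normal approximation on the probability of decoding success" concrete, here is one hedged reading of it: with a near-MDS random code, a length-m block over a memoryless erasure channel with erasure probability eps decodes roughly when at least k symbols survive, and the binomial count of survivors can be approximated by a Gaussian. The exact criterion and parameters used in the paper may differ.

        # Hedged illustration (the paper's exact criterion may differ): Gaussian
        # approximation to the probability that at least k of m transmitted symbols
        # survive a memoryless erasure channel with erasure probability eps, used
        # as a proxy for decoding success with a near-MDS random code.
        from math import erf, sqrt

        def decode_success_prob(m, k, eps):
            mean = m * (1.0 - eps)            # expected number of surviving symbols
            var = m * eps * (1.0 - eps)       # binomial variance
            if var == 0:
                return 1.0 if mean >= k else 0.0
            z = (mean - k + 0.5) / sqrt(var)  # continuity correction
            return 0.5 * (1.0 + erf(z / sqrt(2.0)))

        # Incremental redundancy: after each extra parity block, re-evaluate the
        # cumulative success probability and stop once it clears a target.
        print(decode_success_prob(m=150, k=100, eps=0.25))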

    Tracking an Auto-Regressive Process with Limited Communication per Unit Time

    Samples from a high-dimensional AR[1] process are observed by a sender that can communicate only finitely many bits per unit time to a receiver. The receiver seeks to form an estimate of the process value at every time instant in real time. We consider a time-slotted communication model in a slow-sampling regime where multiple communication slots occur between two sampling instants. We propose a successive update scheme which uses communication between sampling instants to refine estimates of the latest sample, and study the following question: is it better to accumulate the communication of multiple slots to send better-refined estimates, making the receiver wait longer for every refinement, or to be fast but loose and send new information at every communication opportunity? We show that the fast-but-loose successive update scheme with ideal spherical codes is universally optimal in the limit of large dimension. However, most practical quantization codes for fixed dimensions do not meet the ideal performance required for this optimality, and they typically exhibit a bias in the form of a fixed additive error. Interestingly, our analysis shows that the fast-but-loose scheme is not an optimal choice in the presence of such errors, and a judiciously chosen frequency of updates outperforms it.
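
    The trade-off in the question above can be pictured with a toy scalar simulation. Everything below (the uniform quantizer standing in for spherical codes, the parameter values, the function names) is an assumption made for the example, not the scheme analyzed in the paper.

        # Toy scalar illustration of the "fast but loose" successive update idea:
        # several communication slots fall between two AR(1) samples, and in each
        # slot the sender (which tracks the receiver's estimate) quantizes the
        # current estimation error and sends it so the receiver can refine its
        # estimate. A uniform quantizer stands in for spherical codes.
        import random

        def quantize(x, bits, rng=4.0):
            levels = 2 ** bits
            step = 2.0 * rng / levels
            cell = max(0, min(levels - 1, int((x + rng) // step)))
            return cell * step - rng + step / 2.0   # midpoint of the chosen cell

        def track(a=0.9, bits_per_slot=2, slots_per_sample=4, T=1000, seed=0):
            random.seed(seed)
            x, x_hat, sq_err = 0.0, 0.0, 0.0
            for _ in range(T):
                x = a * x + random.gauss(0.0, 1.0)   # new AR(1) sample at the sender
                x_hat = a * x_hat                    # receiver predicts forward
                for _ in range(slots_per_sample):    # refine within the sampling interval
                    x_hat += quantize(x - x_hat, bits_per_slot)
                    sq_err += (x - x_hat) ** 2
            return sq_err / (T * slots_per_sample)

        print("average tracking MSE:", track())

    Pooling several slots into a single higher-resolution update (for example, calling quantize once with slots_per_sample * bits_per_slot bits) gives the other side of the comparison posed in the abstract.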

    Delay-sensitive Communications: Code-Rates, Strategies, and Distributed Control

    An ever-increasing demand for instant and reliable information on modern communication networks forces codewords to operate in a non-asymptotic regime. To achieve reliability over imperfect channels in this regime, codewords sometimes need to be retransmitted from the transmit buffer, aided by a fast feedback mechanism from the receiver. Large occupancy of this buffer results in longer communication delays. Therefore, codewords need to be designed carefully to reduce the transmit queue-length and thus the delay experienced in this buffer. We first study the consequences of physical-layer decisions on the transmit buffer occupancy. We develop an analytical framework that relates the physical-layer channel to the transmit buffer occupancy. We compute the optimal code-rate for finite-length codewords operating over a correlated channel, under certain communication service guarantees. We show that channel memory has a significant impact on this optimal code-rate. Next, we study delay in small ad-hoc networks. In particular, we determine what rates can be supported on a small network when each flow has a certain end-to-end service guarantee. To this end, the service guarantee at each intermediate link is characterized. These results are applied to study the potential benefits of setting up a network suitable for network coding in multicast. In particular, we quantify the gains of network coding over classic routing for service-provisioned multicast communication over butterfly networks. In the wireless setting, we study the trade-off between the communication gains achieved by network coding and the cost of setting up a network that enables network coding. In particular, we show the existence of scenarios where one should not attempt to create a network suitable for coding. Insights obtained from these studies are applied to design a distributed rate control algorithm for a large network. This algorithm maximizes the sum-utility of all flows while satisfying per-flow end-to-end service guarantees. We introduce a notion of effective capacity per communication link that captures the service requirements of the flows sharing that link. Each link maintains a price and an effective capacity, and each flow maintains a rate and a dissatisfaction. Flows and links update their respective variables locally, and we show that their decisions drive the system to an optimal point. We implemented our algorithm on a network simulator and studied its convergence behavior on a few networks of practical interest.
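
    The price-and-rate update structure described above is in the spirit of classic network utility maximization. The loop below is a schematic of that general structure under simplifying assumptions (log utilities, a made-up two-link topology, no service-guarantee or effective-capacity terms); it is not the algorithm from the thesis.

        # Schematic primal-dual rate control (not the thesis algorithm): each link
        # keeps a price, each flow sets its rate from the sum of prices along its
        # route (log utility => rate = 1/price), and links nudge prices toward
        # matching load with capacity. Topology and constants are hypothetical.
        routes = {"f1": ["l1", "l2"], "f2": ["l2"], "f3": ["l1"]}
        capacity = {"l1": 10.0, "l2": 6.0}
        price = {l: 0.1 for l in capacity}
        step = 0.005

        for _ in range(10000):
            # Flow update: maximize log(x) - x * (route price)  =>  x = 1 / price.
            rate = {f: 1.0 / max(sum(price[l] for l in ls), 1e-6)
                    for f, ls in routes.items()}
            # Link update: raise the price if demand exceeds capacity, else lower it.
            for l in capacity:
                load = sum(rate[f] for f, ls in routes.items() if l in ls)
                price[l] = max(0.0, price[l] + step * (load - capacity[l]))

        print({f: round(x, 2) for f, x in rate.items()})

    Per the abstract, the thesis version also maintains an effective capacity per link that captures the service requirements of the flows sharing it; that coupling is omitted in this sketch.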

    On the Queueing Behavior of Random Codes over a Gilbert-Elliot Erasure Channel

    This paper considers the queueing performance of a system that transmits coded data over a time-varying erasure channel. In our model, the queue length and channel state together form a Markov chain that depends on the system parameters. This gives a framework that allows a rigorous analysis of the queue as a function of the code rate. Most prior work in this area either ignores block-length (e.g., fluid models) or assumes error-free communication using finite codes. This work enables one to determine when such assumptions provide good, or bad, approximations of true behavior. Moreover, it offers a new approach to optimize parameters and evaluate performance. This can be valuable for delay-sensitive systems that employ short block lengths. Comment: 5 pages, 4 figures, conference
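
    A toy simulation makes the joint (queue length, channel state) Markov chain tangible. All parameters and the departure rule below (a block of n coded symbols leaves the queue only if at least k symbols survive the erasures) are placeholders chosen for illustration, not the model calibrated in the paper.

        # Toy simulation of the joint (queue length, channel state) chain: Bernoulli
        # packet arrivals, a two-state Gilbert-Elliot erasure channel, and a
        # head-of-line block of n coded symbols that departs only when at least k
        # symbols survive. Parameters are placeholders.
        import random

        def simulate(T=200000, arrival_p=0.3, n=12, k=10,
                     eps_good=0.05, eps_bad=0.5, p_gb=0.1, p_bg=0.3, seed=1):
            random.seed(seed)
            queue, state, total_q = 0, "good", 0
            for _ in range(T):
                queue += random.random() < arrival_p             # packet arrival
                eps = eps_good if state == "good" else eps_bad
                if queue > 0:
                    survivors = sum(random.random() >= eps for _ in range(n))
                    if survivors >= k:                           # block decodes
                        queue -= 1
                # Gilbert-Elliot channel-state transition
                if state == "good" and random.random() < p_gb:
                    state = "bad"
                elif state == "bad" and random.random() < p_bg:
                    state = "good"
                total_q += queue
            return total_q / T

        print("average queue length:", simulate())

    Sweeping the code rate k/n in such a simulation traces out the queue-versus-rate trade-off that the paper analyzes rigorously.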

    Adaptive Distributed Stochastic Gradient Descent for Minimizing Delay in the Presence of Stragglers

    We consider the setting where a master wants to run a distributed stochastic gradient descent (SGD) algorithm on n workers, each having a subset of the data. Distributed SGD may suffer from the effect of stragglers, i.e., slow or unresponsive workers who cause delays. One solution studied in the literature is to wait at each iteration for the responses of the fastest k < n workers before updating the model, where k is a fixed parameter. The choice of the value of k presents a trade-off between the runtime (i.e., convergence rate) of SGD and the error of the model. Towards optimizing the error-runtime trade-off, we investigate distributed SGD with adaptive k. We first design an adaptive policy for varying k that optimizes this trade-off based on an upper bound on the error as a function of the wall-clock time, which we derive. Then, we propose an algorithm for adaptive distributed SGD that is based on a statistical heuristic. We implement our algorithm and provide numerical simulations which confirm our intuition and theoretical analysis. Comment: Accepted to IEEE ICASSP 202
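
    The k-sync structure is easy to sketch. Below, the adaptation rule (enlarging k on a fixed schedule), the quadratic objective, and the exponential worker-delay model are all placeholders for illustration; the paper's policy is driven by the derived error bound and a statistical heuristic.

        # Schematic k-sync distributed SGD with a varying k: the master waits for
        # the k fastest of n workers each iteration, so a small k gives fast noisy
        # steps early while a larger k gives accurate steps near convergence.
        # The adaptation rule and loss below are placeholders.
        import random

        def ksync_sgd(n=20, dim=5, iters=300, lr=0.1, seed=0):
            random.seed(seed)
            w_true = [random.gauss(0, 1) for _ in range(dim)]   # minimizer of the toy loss
            w = [0.0] * dim
            wall_clock, k = 0.0, 2
            for t in range(iters):
                # Each worker returns a noisy gradient of 0.5*||w - w_true||^2 after
                # an exponential compute delay; the master waits for the k fastest.
                delays = sorted(random.expovariate(1.0) for _ in range(n))
                wall_clock += delays[k - 1]
                grads = [[(w[j] - w_true[j]) + random.gauss(0, 1.0) for j in range(dim)]
                         for _ in range(k)]
                avg = [sum(g[j] for g in grads) / k for j in range(dim)]
                w = [w[j] - lr * avg[j] for j in range(dim)]
                if t in (100, 200):            # placeholder schedule: grow k over time
                    k = min(n, 4 * k)
            err = sum((w[j] - w_true[j]) ** 2 for j in range(dim))
            return wall_clock, err

        print("(wall-clock time, final squared error):", ksync_sgd())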

    Value-Aware Resource Allocation for Service Guarantees in Networks

    The traditional formulation of the total value of information transfer is a multi-commodity flow problem. Each data source is seen as generating a commodity along a fixed route, and the objective is to maximize the total system throughput under some concept of fairness, subject to capacity constraints of the links used. This problem is well studied under the framework of network utility maximization and has led to several different distributed congestion control schemes. However, this view of value does not capture the fact that flows may associate value, not just with throughput, but with link-quality metrics such as packet delay and jitter. In this work, the congestion control problem is redefined to include individual source preferences. It is assumed that degradation in link quality seen by a flow adds up on the links it traverses, and the total utility is maximized in such a way that the end-to-end quality degradation seen by each source is bounded by a value that it declares. Decoupling source dissatisfaction and link degradation through an effective capacity variable, a distributed and provably optimal resource allocation algorithm is designed to maximize system utility subject to these quality constraints. The applicability of the controller in different situations is supported by numerical simulations, and a protocol developed using the controller is simulated on ns-2 to illustrate its performance.
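
    One way to picture the effective-capacity decoupling is the small numerical illustration below. The M/M/1-style degradation model d(y) = 1/(C - y), the even split of each flow's declared budget across its links, and the topology are all assumptions made for this example; the paper's controller allocates these quantities dynamically rather than by a static split.

        # Hedged illustration of effective capacity (not the paper's controller):
        # model per-link degradation as d(y) = 1/(C - y) for load y on a link of
        # capacity C (a hypothetical model), split each flow's declared end-to-end
        # degradation budget evenly across its links, and back out the largest load
        # each link may carry so that every flow's per-link allowance is respected.
        routes = {"f1": ["l1", "l2"], "f2": ["l2"]}     # hypothetical topology
        capacity = {"l1": 10.0, "l2": 8.0}
        budget = {"f1": 0.6, "f2": 0.5}                 # declared degradation bounds

        # Per-link allowance: the tightest even-split share among flows on the link.
        allowance = {l: min(budget[f] / len(ls) for f, ls in routes.items() if l in ls)
                     for l in capacity}

        # d(y) = 1/(C - y) <= a  <=>  y <= C - 1/a : the link's effective capacity.
        effective_capacity = {l: max(0.0, capacity[l] - 1.0 / a)
                              for l, a in allowance.items()}
        print(effective_capacity)    # loads the links may admit under the budgets

    In this illustration, a rate controller constrained by these effective capacities rather than the physical ones keeps every flow's end-to-end degradation within its declared bound, which is the decoupling the abstract refers to.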